The manual forensic investigation of security incidents is an opaque process that involves the collection and correlation of diverse evidence. In this work we first conduct a complex experiment to expand our understanding of forensic analysis processes. Over a period of four weeks, we systematically investigated 200 detected security incidents involving compromised hosts within a large operational network. We used data from four commonly used security sources, namely Snort alerts, reconnaissance and vulnerability scanners, blacklists, and a search engine, to manually investigate these incidents. Based on our experiment, we first evaluate the (complementary) utility of the four security data sources and, surprisingly, find that the search engine provided useful evidence for diagnosing many more incidents than the more traditional security sources, i.e., blacklists, reconnaissance, and vulnerability reports. Based on our validation, we then identify and make publicly available a list of 165 good Snort signatures, i.e., signatures that were effective in identifying validated malware without producing false positives. In addition, we analyze the characteristics of good signatures and identify strong correlations between different signature features and their effectiveness, i.e., the number of validated incidents in which a good signature was identified. Based on our experiment, we finally introduce an IDS signature quality metric that security specialists can use to evaluate available rulesets, prioritize the generated alerts, and facilitate forensic analysis. We apply our metric to characterize the most popular Snort rulesets. Our analysis of signatures is useful not only for configuring Snort but also for establishing best practices and for teaching how to write new IDS signatures.
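As a rough illustration of the effectiveness notion described above (a per-signature count of validated incidents, with "good" signatures producing no false positives), the following Python sketch shows one hypothetical way such a tally could be computed from labeled incident data. The record layout, field names, and SIDs are assumptions for illustration only, not the paper's actual dataset or quality metric.

    from collections import Counter

    # Each record: (snort_signature_id, incident_verdict), one per investigated
    # incident in which the signature fired; the verdict comes from the manual
    # validation step ("malware" or "false_positive"). Example SIDs are placeholders.
    validated_alerts = [
        (2003492, "malware"),
        (2003492, "malware"),
        (2010935, "false_positive"),
        (2008120, "malware"),
    ]

    # Effectiveness: number of validated (malware) incidents per signature.
    hits = Counter(sid for sid, verdict in validated_alerts if verdict == "malware")
    # Signatures that fired in at least one incident validated as a false positive.
    misses = {sid for sid, verdict in validated_alerts if verdict == "false_positive"}

    # "Good" signatures: identified validated malware and never produced a false positive.
    good_signatures = {sid: n for sid, n in hits.items() if sid not in misses}

    for sid, effectiveness in sorted(good_signatures.items(), key=lambda x: -x[1]):
        print(f"SID {sid}: effective in {effectiveness} validated incident(s)")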